7 research outputs found

    Supporting Collaborative Innovation at Scale

    Full text link
    Emerging online innovation platforms have enabled large groups of people to collaborate and generate ideas together in ways that were not possible before. However, these platforms also introduce new challenges in finding inspiration from a large number of ideas and in coordinating the collective effort. In my dissertation, I address the challenges of large-scale idea generation platforms by developing methods and systems for helping people make effective use of each other’s ideas, and for orchestrating collective effort to reduce redundancy and increase the quality and breadth of generated ideas.

    Toward collaborative ideation at scale: Leveraging ideas from others to generate more creative and diverse ideas

    Get PDF
    A growing number of large collaborative idea generation platforms promise that by generating ideas together, people can create better ideas than any would have alone. But how might these platforms best leverage the number and diversity of contributors to help each contributor generate even better ideas? Prior research suggests that seeing particularly creative or diverse ideas from others can inspire you, but few scalable mechanisms exist to assess diversity. We contribute a new scalable crowd-powered method for evaluating the diversity of sets of ideas. The method relies on similarity comparisons (is idea A more similar to B or C?) generated by non-experts to create an abstract spatial idea map. Our validation study reveals that human raters agree with the estimates of dissimilarity derived from our idea map as much as or more than they agree with each other. People who saw diverse sets of examples from our idea map generated more diverse ideas than those who saw randomly selected examples. Our results also corroborate findings from prior research showing that people presented with creative examples generated more creative ideas than those who saw a set of random examples. We see this work as a step toward building more effective online systems for supporting large-scale collective ideation.
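    The method described in this abstract lends itself to a simple computational reading: crowd triplet judgments are embedded into a low-dimensional idea map, and diverse example sets are then read off that map. Below is a minimal illustrative sketch of that pipeline, not the authors' implementation: the softplus triplet loss, the gradient-descent settings, the farthest-point selection heuristic, and the function names (embed_from_triplets, diverse_examples) are all assumptions made for illustration.

    # Illustrative sketch (assumed, not the authors' code): turn crowd triplet
    # judgments ("is idea i more similar to idea j or to idea k?") into a 2-D
    # idea map, then pick a spread-out set of example ideas from that map.
    import numpy as np

    def embed_from_triplets(triplets, n_ideas, dim=2, steps=2000, lr=0.05, seed=0):
        """Fit coordinates so that, for each triplet (i, j, k), idea i ends up
        closer to idea j than to idea k (softplus triplet loss, plain gradient descent)."""
        rng = np.random.default_rng(seed)
        x = rng.normal(scale=0.1, size=(n_ideas, dim))
        for _ in range(steps):
            grad = np.zeros_like(x)
            for i, j, k in triplets:
                d_ij = x[i] - x[j]
                d_ik = x[i] - x[k]
                z = d_ij @ d_ij - d_ik @ d_ik      # positive when the triplet is violated
                g = 1.0 / (1.0 + np.exp(-z))       # derivative of softplus(z)
                grad[i] += 2 * g * (x[k] - x[j])
                grad[j] -= 2 * g * d_ij
                grad[k] += 2 * g * d_ik
            x -= lr * grad / len(triplets)
        return x

    def diverse_examples(coords, n_examples):
        """Greedy farthest-point sampling: start near the centroid, then repeatedly
        add the idea farthest from everything already chosen."""
        chosen = [int(np.argmin(np.linalg.norm(coords - coords.mean(axis=0), axis=1)))]
        while len(chosen) < n_examples:
            dists = np.linalg.norm(coords[:, None, :] - coords[chosen][None, :, :], axis=2).min(axis=1)
            dists[chosen] = -np.inf
            chosen.append(int(np.argmax(dists)))
        return chosen

    # Toy usage: six ideas and a handful of judgments (i, j, k) meaning
    # "idea i is more similar to idea j than to idea k".
    triplets = [(0, 1, 4), (1, 0, 5), (2, 3, 0), (3, 2, 1), (4, 5, 2), (5, 4, 3)]
    coords = embed_from_triplets(triplets, n_ideas=6)
    print(diverse_examples(coords, n_examples=3))

    A production system would use a dedicated triplet-embedding method and a validated diversity measure rather than this toy loss; the sketch only shows the overall shape of the pipeline the abstract describes: comparisons in, spatial map out, spread-out example set selected.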

    Beyond Summarization: Designing AI Support for Real-World Expository Writing Tasks

    Full text link
    Large language models have introduced exciting new opportunities and challenges in designing and developing AI-assisted writing support tools. Recent work has shown that leveraging this new technology can transform writing in many scenarios, such as ideation during creative writing, editing support, and summarization. However, AI-supported expository writing -- including real-world tasks like scholars writing literature reviews or doctors writing progress notes -- is relatively understudied. In this position paper, we argue that developing AI support for expository writing poses unique and exciting research challenges and can lead to high real-world impact. We characterize expository writing as evidence-based and knowledge-generating: it contains summaries of external documents as well as new information or knowledge. It can be seen as the product of authors' sensemaking process over a set of source documents, and the interplay between reading, reflection, and writing opens up new opportunities for designing AI support. We sketch three components for AI support design and discuss considerations for future research.
    Comment: 3 pages, 1 figure; accepted by The Second Workshop on Intelligent and Interactive Writing Assistants.

    The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces

    Full text link
    Scholarly publications are key to the transfer of knowledge from scholars to others. However, research papers are information-dense, and as the volume of the scientific literature grows, so does the need for new technology to support the reading process. In contrast to the process of finding papers, which has been transformed by Internet technology, the experience of reading research papers has changed little in decades. The PDF format for sharing research papers is widely used due to its portability, but it has significant downsides, including static content, poor accessibility for low-vision readers, and difficulty reading on mobile devices. This paper explores the question "Can recent advances in AI and HCI power intelligent, interactive, and accessible reading interfaces -- even for legacy PDFs?" We describe the Semantic Reader Project, a collaborative effort across multiple institutions to explore the automatic creation of dynamic reading interfaces for research papers. Through this project, we've developed ten research prototype interfaces and conducted usability studies with more than 300 participants and real-world users that show improved reading experiences for scholars. We've also released a production reading interface for research papers that will incorporate the best features as they mature. We structure this paper around challenges scholars and the public face when reading research papers -- Discovery, Efficiency, Comprehension, Synthesis, and Accessibility -- and present an overview of our progress and remaining open challenges.

    Learning from Team and Group Diversity: Nurturing and Benefiting from our Heterogeneity

    Full text link
    By 2019, diversity is an established fact in most workplaces, teams, and work-groups, presenting both old and new challenges to CSCW in terms of team structure and technological supports for increasingly diverse teams. The research literature on diversity and teams has examined many definitions and attributes of diversity, and has described different types of teams, tasks, and measures, with contrasting and even contradictory results. Diversity becomes a strength in some studies, and a burden in others. The literature is similarly complex regarding individual and organizational approaches to realize those strengths, or to mitigate those burdens. In this workshop, we collectively take stock of these complex findings; we consider the several theoretical and methodological efforts to organize these findings; and we propose new research directions to address the “diversity of diversity studies.”

    Creativity on paid crowdsourcing platforms

    No full text
    Crowdsourcing platforms are increasingly being harnessed for creative work. The platforms’ potential for creative work is clearly identified, but the workers’ perspectives on such work have not been extensively documented. In this paper, we uncover what the workers have to say about creative work on paid crowdsourcing platforms. Through a quantitative and qualitative analysis of a questionnaire launched on two different crowdsourcing platforms, our results revealed clear differences between the workers on the platforms in both preferences and prior experience with creative work. We identify common pitfalls with creative work on crowdsourcing platforms, provide recommendations for requesters of creative work, and discuss the meaning of our findings within the broader scope of creativity-oriented research. To the best of our knowledge, we contribute the first extensive worker-oriented study of creative work on paid crowdsourcing platforms.